Internet and Network Technologies
CloudOps Explainability
Applying the Explainability Approach to Guide Cloud Implementation
Explainability for Cloud Deployments: Applying Explainability in CloudOps

Applying the Explainability Approach to Guide Cloud Implementation

Course Number: it_cocoexdj_01_enus
Lesson Objectives


  • discover the key concepts covered in this course
  • describe the concept of AI Explainability and differentiate between AI Explainability and CloudOps Explainability
  • describe CloudOps Explainability and the role it plays in CloudOps implementation for managing multi-cloud solutions
  • define explanatory systems and evaluate them from functional, operational, usability, security, and validation perspectives
  • identify properties that are used to define systems to accommodate explainability approaches and recognize how users interact with explainable systems and what is expected of them
  • recall the effect of explainability on the robustness, security, and privacy aspects of predictive systems and describe qualitative and quantitative validation approaches for evaluating how well explanations are understood
  • list explainability techniques that can be used to define the operational and functional derivatives of CloudOps, including leave one column out (LOCO), permutation importance, and local interpretable model-agnostic explanations (LIME), as sketched in the example after this list
  • recognize the role of explainability and how it can be applied throughout the process of operating cloud environments and infrastructures to ensure efficient service delivery following the CloudOps paradigm
  • describe the three stages of AI Explainability along with the methodologies that are used in each stage to derive the right CloudOps model for implementation guidance
  • recognize the role of explainability in defining AI-assisted Cloud Managed Services that can be used to manage large cloud enterprise distributed applications
  • list the architectures that can be derived using Explainable Models and that can help share CloudOps or DevOps Model Explainability with the stakeholders to establish better collaboration
  • recognize the role of Explainable AI reasoning paths in building CloudOps workflows that can be trusted by customers, employees, regulators, and other key stakeholders
  • describe the role of CloudOps and DevOps Explainability in mitigating challenges along with the need for management and governance of AI frameworks in CloudOps architectures
  • summarize the key concepts covered in this course
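
The techniques objective above names leave one column out (LOCO), permutation importance, and LIME. As a rough, course-independent illustration of the first two, the following Python sketch scores a hypothetical tabular model with and without each feature; the dataset, model, and feature meanings are stand-ins (assuming scikit-learn), not material from the course.

```python
# Illustrative sketch only: permutation importance and leave-one-column-out
# (LOCO) on a hypothetical tabular model. Assumes scikit-learn is installed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data; a real CloudOps case might use metrics such as
# CPU, latency, or error-rate features predicting an incident label.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Permutation importance: shuffle one column at a time, measure the score drop.
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, drop in enumerate(perm.importances_mean):
    print(f"feature {i}: permutation importance = {drop:.3f}")

# Leave one column out (LOCO): retrain without each column and compare scores.
for i in range(X.shape[1]):
    keep = [j for j in range(X.shape[1]) if j != i]
    reduced = RandomForestClassifier(random_state=0).fit(X_train[:, keep], y_train)
    drop = baseline - reduced.score(X_test[:, keep], y_test)
    print(f"feature {i}: LOCO score drop = {drop:.3f}")
```

Both loops attribute model behaviour to individual inputs, which is the kind of functional explanation the course relates to CloudOps signals; LIME would instead fit a local surrogate model around a single prediction.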

Overview/Description

In this course, you'll explore the concept of AI Explainability, the role of CloudOps Explainability in managing multi-cloud solutions, how to evaluate explanatory systems, and the properties used to define systems that accommodate explainability approaches.

You'll look at how users interact with explainable systems and the effect of explainability on the robustness, security, and privacy aspects of predictive systems. Next, you'll learn about qualitative and quantitative validation approaches and the explainability techniques used to define the operational and functional derivatives of cloud operations.

You'll examine how to apply explainability throughout the process of operating cloud environments and infrastructures, the methodologies involved in the three stages of AI Explainability for deriving the right CloudOps model for implementation guidance, and the role of explainability in defining AI-assisted Cloud Managed Services.

Finally, you'll learn about the architectures that can be derived using Explainable Models, the role of Explainable AI reasoning paths in building trustworthy CloudOps workflows, and the need for management and governance of AI frameworks in CloudOps architectures.



Target

Prerequisites: none

Explainability for Cloud Deployments: Applying Explainability in CloudOps

Course Number: it_dpcoexdj_01_enus
Lesson Objectives


  • discover the key concepts covered in this course
  • define the concepts of interpretability and explainability and outline how they can be applied to CloudOps
  • list the stages of the design thinking process involved in developing an empathic approach to crafting explainability for varying CloudOps users and stakeholders
  • outline the reasons and methods for building explainability into a CloudOps workflow, with a focus on eliminating the negative impact of IT
  • state the challenges and opportunities of explainable AI and describe its impact on DevOps principles integrated in CloudOps
  • describe the explainability decision tree that can be used to derive the value stream of an existing CloudOps implementation
  • describe the fundamental principles of explainability along with the categories of explanations that help build a CloudOps practice
  • recall the different algorithms that can be used to explain CloudOps practices adopted in the enterprise
  • recognize the benefits of using explainability and specify how it helps configure and implement continuous monitoring and feedback mechanisms
  • recall the role of explainability in achieving continuous ops in public, private, and multi-cloud environments
  • describe the governance strategy that needs to be considered when configuring and deploying explainable cloud applications in multi-cloud environments
  • recognize the features of applied intelligence that help derive an AIOps solution for DevOps, site reliability engineers, and on-call teams to manage CloudOps implementation
  • create a basic explainability workflow using New Relic by creating alert policies with their associated conditions, as sketched in the example after this list
  • summarize the key concepts covered in this course
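
As a companion to the New Relic objective above, here is a rough Python sketch of creating an alert policy and attaching a condition through New Relic's REST API v2. The endpoint paths and payload fields reflect that older API and should be verified against current New Relic documentation (NerdGraph is now the recommended interface); the API key, application entity ID, and thresholds are placeholders, not values from the course.

```python
# Rough sketch (assumptions flagged in comments): create a New Relic alert
# policy, then attach one condition, via the REST API v2 endpoints.
import requests

API_KEY = "YOUR_NEW_RELIC_API_KEY"  # placeholder, not a real key
HEADERS = {"X-Api-Key": API_KEY, "Content-Type": "application/json"}

# 1. Create an alert policy.
policy_payload = {"policy": {"name": "CloudOps explainability demo",
                             "incident_preference": "PER_POLICY"}}
resp = requests.post("https://api.newrelic.com/v2/alerts_policies.json",
                     json=policy_payload, headers=HEADERS)
resp.raise_for_status()
policy_id = resp.json()["policy"]["id"]

# 2. Attach a condition to the policy (example: APM error percentage).
#    The application entity ID "1234567" is a hypothetical placeholder.
condition_payload = {
    "condition": {
        "type": "apm_app_metric",
        "name": "High error percentage",
        "enabled": True,
        "entities": ["1234567"],
        "metric": "error_percentage",
        "condition_scope": "application",
        "terms": [{"duration": "5", "operator": "above", "priority": "critical",
                   "threshold": "5", "time_function": "all"}],
    }
}
resp = requests.post(
    f"https://api.newrelic.com/v2/alerts_conditions/policies/{policy_id}.json",
    json=condition_payload, headers=HEADERS)
resp.raise_for_status()
print("Created policy", policy_id, "with condition", resp.json()["condition"]["id"])
```

The policy-then-condition ordering mirrors the workflow the objective describes: the policy groups related conditions, and each condition encodes an explainable threshold that stakeholders can inspect.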

Overview/Description

CloudOps architects need to explain how their often complex, multi-cloud deployment solutions work to a wide variety of audiences - not an easy feat, but one you'll learn to accomplish in this course.

You'll start by defining the concepts of interpretability and explainability in CloudOps. You'll then outline how to build explainability into a CloudOps workflow, investigating the core explainability principles and benefits along the way.

You'll examine the explainability decision tree used to derive a value stream from an existing CloudOps implementation, the algorithms used to explain CloudOps practices, and the governance strategy for deploying explainable cloud applications in multi-cloud environments.

You'll examine the challenges and opportunities of explainable AI in CloudOps and identify the key applied intelligence features to derive AIOps solutions. You'll end this course by creating a basic explainability workflow using New Relic.



Target

Prerequisites: none
